Early diagnosis of Type 2 Diabetes Mellitus (T2DM) is crucial for enabling timely therapeutic interventions and lifestyle modifications. As medical imaging data become more widely available for many patient populations, we sought to investigate whether image-derived phenotypic data could be leveraged in tabular learning classifier models to predict T2DM incidence without the use of invasive blood lab measurements. We show that both neural network and decision tree models that use image-derived phenotypes can predict patient T2DM status with recall scores as high as 87.6%. We also propose a novel use of these same architectures as "SynthA1c encoders" that are able to output interpretable values mimicking empirical blood hemoglobin A1c laboratory measurements. Finally, we demonstrate that the sensitivity of T2DM risk prediction models to small perturbations in input vector components can be used to predict performance on covariates sampled from previously unseen patient populations.
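As a rough illustration of the tabular-classification setup described above (not the authors' implementation), the following sketch trains a tree-based classifier on synthetic stand-ins for image-derived phenotypes and reports recall; the features, labels, and model choices are all assumptions.

```python
# Minimal sketch of a tabular T2DM classifier over image-derived phenotypes.
# Features and labels are synthetic placeholders, not the paper's actual inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical image-derived phenotype features plus basic vitals.
X = rng.normal(size=(n, 6))
# Synthetic labels correlated with the first two features, standing in for T2DM status.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("recall:", recall_score(y_te, clf.predict(X_te)))  # the paper reports recall up to 87.6%
```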
We train a neural network model to predict the full phase-space evolution of cosmological N-body simulations. Its success implies that the neural network model is accurately approximating the Green's function expansion that relates the simulation's initial conditions to its outcome at later times, deep into the nonlinear regime. We test the accuracy of this approximation by assessing the model's performance on well-understood simple cases that have either known exact solutions or well-understood expansions. These scenarios include spherical configurations, isolated plane waves, and two interacting plane waves: initial conditions that are very different from the Gaussian random fields used for training. We find our model generalizes well to these well-understood scenarios, demonstrating that the network has inferred general physical principles and learned the nonlinear mode couplings from the complex, random Gaussian training data. These tests also provide useful diagnostics for finding the model's strengths and weaknesses and for identifying strategies to improve it. We also test the model on initial conditions containing only transverse modes, a class of modes that differ from the longitudinal growing modes used in the training set not only in their phases but also in their evolution. When the network encounters these initial conditions, which are orthogonal to the training set, the model fails completely. Beyond these simple configurations, we evaluate the model's predictions of the density, displacement, and momentum power spectra for standard N-body simulation initial conditions. We compare these summary statistics against the N-body results and against COLA, an approximate fast simulation method. Our model achieves percent-level accuracy at the nonlinear scale of $k \sim 1\,\mathrm{Mpc}^{-1}\,h$, a significant improvement over COLA.
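A power spectrum comparison like the one described above can be illustrated with a simple estimator; the sketch below computes an isotropically binned P(k) for a density-contrast field on a periodic grid. The box size, grid resolution, and binning are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: isotropically binned power spectrum P(k) of a density-contrast field
# on a periodic box, the kind of summary statistic compared against N-body and COLA.
import numpy as np

def power_spectrum(delta, box_size, n_bins=32):
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # |delta_k|^2 normalized so P(k) carries units of volume.
    pk3d = np.abs(delta_k) ** 2 * (box_size / n**2) ** 3
    kf = 2 * np.pi / box_size                                  # fundamental frequency
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    bins = np.linspace(kf, (n // 2) * kf, n_bins + 1)          # bin up to the 1D Nyquist mode
    which = np.digitize(kmag.ravel(), bins)
    pk = np.array([pk3d.ravel()[which == i].mean() for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

# Example on a Gaussian random field (stand-in for an emulated or simulated density field).
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64, 64))
k, pk = power_spectrum(delta, box_size=250.0)                  # box size in Mpc/h (assumed)
```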
We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling. The network incorporates strong and explicit structural priors to reflect the compositional nature of visual scenes, and synthesizes images through a sequential process. It operates in two stages: a fast and lightweight planning phase, where we draft a high-level scene layout, followed by an attention-based execution phase, in which the layout is refined, evolving into a rich and detailed picture. Our model moves away from conventional black-box GAN architectures that feature a flat and monolithic latent space, towards a transparent design that encourages efficiency, controllability and interpretability. We demonstrate GANformer2's strengths and qualities through a careful evaluation over a range of datasets, from multi-object CLEVR scenes to the challenging COCO images, showing that it successfully achieves state-of-the-art performance in terms of visual quality, diversity and consistency. Further experiments demonstrate the model's disentanglement and provide a deeper insight into its generative process, as it proceeds step by step from a rough initial sketch, to a detailed layout that accounts for objects' depths and dependencies, and finally to the full-resolution depiction of vibrant and intricate real-world scenes. See https://github.com/dorarad/gansformer for the model implementation.
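As a structural illustration only (not the released code at the repository above), the toy sketch below separates generation into a lightweight planning module that drafts a coarse layout from object latents and an attention-based execution module that refines it; all module names, sizes, and operations are assumptions.

```python
# Toy two-stage generator sketch: plan a coarse layout, then refine it with attention.
# Module names and dimensions are illustrative assumptions, not GANformer2's actual code.
import torch
import torch.nn as nn

class Planner(nn.Module):
    """Fast, lightweight stage: map per-object latents to a coarse layout map."""
    def __init__(self, latent_dim=64, layout_res=16):
        super().__init__()
        self.to_layout = nn.Linear(latent_dim, layout_res * layout_res)
        self.res = layout_res

    def forward(self, z):                      # z: (batch, n_objects, latent_dim)
        logits = self.to_layout(z)             # (batch, n_objects, res * res)
        layout = logits.softmax(dim=1)         # soft assignment of each location to an object
        return layout.view(z.shape[0], z.shape[1], self.res, self.res)

class Executor(nn.Module):
    """Attention-based stage: image locations attend to object latents to refine the draft."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=latent_dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Conv2d(latent_dim, 3, kernel_size=1)

    def forward(self, layout, z):
        b, k, h, w = layout.shape
        # Initial features: layout-weighted mixture of object latents at each location.
        feats = torch.einsum("bkhw,bkc->bhwc", layout, z).reshape(b, h * w, -1)
        refined, _ = self.attn(feats, z, z)    # locations (queries) attend to objects (keys/values)
        return self.to_rgb(refined.reshape(b, h, w, -1).permute(0, 3, 1, 2))

z = torch.randn(2, 8, 64)                      # 2 scenes, 8 object latents each
image = Executor()(Planner()(z), z)            # (2, 3, 16, 16) coarse output
```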
Deep learning has shown recent success in classifying abnormalities in chest X-rays, but datasets remain small compared to natural image datasets. Supervision of abnormality localization has been shown to improve trained models, partially compensating for dataset size. However, explicitly labeling these abnormalities requires an expert and is very time-consuming. We propose a potentially scalable method for collecting implicit localization data, using an eye tracker to capture gaze locations and a microphone to capture a dictated report, mimicking the setup of a reading room. The resulting REFLACX (Reports and Eye-tracking data for Localization of Abnormalities in Chest X-rays) dataset was labeled by five radiologists and contains 3,032 synchronized sets of eye-tracking data and timestamped report transcriptions for 2,616 chest X-rays from the MIMIC-CXR dataset. We also provide auxiliary annotations, including bounding boxes around the lungs and heart, and validation labels consisting of ellipses localizing abnormalities and image-level labels. In addition, a small subset of the data contains readings by all of the radiologists, allowing for the calculation of inter-rater scores.
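A hedged sketch of how such synchronized streams might be combined is shown below: gaze fixations are associated with the words dictated while they occurred, yielding an implicit localization signal. The in-memory tables and column names are hypothetical stand-ins, not the dataset's actual schema.

```python
# Hypothetical sketch of pairing gaze fixations with timestamped report transcriptions.
# The tables and column names below are invented for illustration; consult the dataset's
# documentation for its real schema.
import pandas as pd

fixations = pd.DataFrame({
    "x": [512, 530, 410], "y": [300, 640, 655],
    "t_start": [0.0, 0.9, 2.1], "t_end": [0.8, 1.9, 3.0],     # seconds
})
transcript = pd.DataFrame({
    "word": ["left", "lower", "lobe", "opacity"],
    "t_start": [0.7, 1.1, 1.5, 2.0], "t_end": [1.0, 1.4, 1.9, 2.6],
})

def fixations_during(word_row, fix):
    """Fixations whose time span overlaps the span in which a word was dictated."""
    overlap = (fix["t_start"] < word_row["t_end"]) & (fix["t_end"] > word_row["t_start"])
    return fix.loc[overlap, ["x", "y"]]

# Gaze locations loosely associated with each dictated word: an implicit localization signal.
for _, row in transcript.iterrows():
    print(row["word"], fixations_during(row, fixations).values.tolist())
```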
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of a foundation model are inherited by all the models adapted from it downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what their emergent properties make them capable of. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and robust question engine that leverages Visual Genome scene graph structures to create 22M diverse reasoning questions, which all come with functional programs that represent their semantics. We use the programs to gain tight control over the answer distribution and present a new tunable smoothing technique to mitigate question biases. Accompanying the dataset is a suite of new metrics that evaluate essential qualities such as consistency, grounding and plausibility. A careful analysis is performed for baselines as well as state-of-the-art models, providing fine-grained results for different question types and topologies. Whereas a blind LSTM obtains a mere 42.1%, and strong VQA models achieve 54.1%, human performance tops at 89.3%, offering ample opportunity for new research to explore. We hope GQA will provide an enabling resource for the next generation of models with enhanced robustness, improved consistency, and deeper semantic understanding of vision and language.
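To make the idea of semantics-representing functional programs concrete, the toy sketch below executes a small program against a scene-graph-like structure; the representation is a simplification invented for exposition, not GQA's actual program format.

```python
# Toy illustration of executing a functional program against a scene-graph-like dict.
# This representation is simplified for exposition and is not GQA's actual format.
scene_graph = {
    "o1": {"name": "apple", "attributes": ["red"], "relations": {"on": "o2"}},
    "o2": {"name": "table", "attributes": ["wooden"], "relations": {}},
}

# Program for a question like "What color is the apple on the table?"
program = [("select", "apple"), ("relate_check", "on", "table"), ("query_attribute",)]

def execute(program, graph):
    selected = None
    for op, *args in program:
        if op == "select":
            selected = next(oid for oid, o in graph.items() if o["name"] == args[0])
        elif op == "relate_check":
            rel, target = args
            other = graph[selected]["relations"].get(rel)
            assert other is not None and graph[other]["name"] == target
        elif op == "query_attribute":
            return graph[selected]["attributes"][0]

print(execute(program, scene_graph))  # -> "red"
```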
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
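To make the "optimization rather than integration" idea concrete, the sketch below fits a Gaussian variational family to the posterior over a Gaussian mean by maximizing a closed-form ELBO, then checks the result against the exact conjugate posterior; the model and its hyperparameters are chosen purely for illustration.

```python
# Minimal parametric variational inference sketch: fit q(mu) = N(m, s^2) to the posterior
# of a Gaussian mean with known noise variance by maximizing the ELBO, then compare with
# the exact conjugate posterior. The model and hyperparameters are illustrative choices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sigma2, tau2, mu0 = 1.0, 4.0, 0.0          # likelihood variance, prior variance, prior mean
x = rng.normal(loc=1.5, scale=np.sqrt(sigma2), size=50)
n = x.size

def negative_elbo(params):
    m, log_s = params
    s2 = np.exp(2 * log_s)
    # E_q[log p(x | mu)] for x_i ~ N(mu, sigma2)
    exp_loglik = -0.5 * n * np.log(2 * np.pi * sigma2) - (np.sum((x - m) ** 2) + n * s2) / (2 * sigma2)
    # E_q[log p(mu)] for mu ~ N(mu0, tau2)
    exp_logprior = -0.5 * np.log(2 * np.pi * tau2) - ((m - mu0) ** 2 + s2) / (2 * tau2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)    # entropy of q
    return -(exp_loglik + exp_logprior + entropy)

opt = minimize(negative_elbo, x0=np.array([0.0, 0.0]))
m_hat, s_hat = opt.x[0], np.exp(opt.x[1])

# Exact posterior (conjugate case): the variational solution matches it because the
# Gaussian family contains the true posterior.
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * (x.sum() / sigma2 + mu0 / tau2)
print(m_hat, post_mean, s_hat, np.sqrt(post_var))
```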
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
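For readers unfamiliar with the setting, the minimal sketch below shows how a few-shot completion episode for one long-tail relation might be structured; the relation, entities, and field names are invented examples rather than any benchmark's actual format.

```python
# Sketch of a few-shot KG completion episode for one long-tail relation.
# The triples below are invented examples; real benchmarks differ in format and scale.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Episode:
    relation: str
    support: List[Tuple[str, str]]   # K known (head, tail) pairs for the relation
    query: List[Tuple[str, str]]     # pairs to complete or rank at evaluation time

episode = Episode(
    relation="team_coached_by",
    support=[("team_a", "coach_x"), ("team_b", "coach_y")],   # K = 2 shots
    query=[("team_c", "?")],                                  # tail entity to predict
)
```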
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
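A toy sketch of the feature-level idea described above is given below: masked average pooling of support features yields dynamic class centers, which then re-weight query features by similarity. The shapes and the exact weighting rule are assumptions, not the RefT implementation.

```python
# Toy sketch: derive class centers from support features via masked average pooling,
# then re-weight query features by their similarity to those centers. Shapes and the
# weighting rule are illustrative assumptions, not the paper's actual module.
import torch
import torch.nn.functional as F

def dynamic_class_centers(support_feats, support_masks):
    # support_feats: (K, C, H, W); support_masks: (K, H, W) binary masks of support objects
    masks = support_masks.unsqueeze(1).float()                      # (K, 1, H, W)
    centers = (support_feats * masks).sum(dim=(2, 3)) / masks.sum(dim=(2, 3)).clamp(min=1e-6)
    return centers                                                  # (K, C)

def reweight_query(query_feats, centers):
    # query_feats: (C, H, W); weight each location by its max cosine similarity to any center.
    q = F.normalize(query_feats.flatten(1), dim=0)                  # (C, H*W)
    c = F.normalize(centers, dim=1)                                 # (K, C)
    sim = (c @ q).max(dim=0).values.clamp(min=0)                    # (H*W,)
    return query_feats * sim.view(1, *query_feats.shape[1:])

support_feats = torch.randn(5, 256, 32, 32)                         # 5 support shots
support_masks = torch.rand(5, 32, 32) > 0.5
query_feats = torch.randn(256, 32, 32)
enhanced = reweight_query(query_feats, dynamic_class_centers(support_feats, support_masks))
```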
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
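The image-level component can be illustrated with a simple channel-statistics alignment; the sketch below matches a source image's per-channel mean and standard deviation to target-domain statistics, a common stand-in rather than necessarily the exact photometric alignment module proposed in the paper.

```python
# Simple illustration of image-level photometric alignment: shift a source image's
# per-channel statistics toward target-domain statistics. This mean/std matching rule
# is a generic stand-in, not necessarily the module proposed in the paper.
import numpy as np

def photometric_align(source_img, target_mean, target_std, eps=1e-6):
    # source_img: (H, W, 3) float array in [0, 1]; target_mean/std: per-channel RGB stats (3,)
    src_mean = source_img.mean(axis=(0, 1))
    src_std = source_img.std(axis=(0, 1)) + eps
    aligned = (source_img - src_mean) / src_std * target_std + target_mean
    return np.clip(aligned, 0.0, 1.0)

# Example with random stand-ins for a GTA5 source image and assumed Cityscapes channel stats.
source = np.random.rand(512, 1024, 3)
aligned = photometric_align(source, target_mean=np.array([0.29, 0.33, 0.29]),
                            target_std=np.array([0.19, 0.19, 0.19]))
```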